health advice


Visual Authority and the Rhetoric of Health Misinformation: A Multimodal Analysis of Social Media Videos

Zarei, Mohammad Reza, Stead-Coyle, Barbara, Christensen, Michael, Everts, Sarah, Komeili, Majid

arXiv.org Artificial Intelligence

Short-form video platforms are central sites for health advice, where alternative narratives mix useful, misleading, and harmful content. Rather than adjudicating truth, this study examines how credibility is packaged in nutrition and supplement videos by analyzing the intersection of authority signals, narrative techniques, and monetization. We assemble a cross-platform corpus of 152 public videos from TikTok, Instagram, and YouTube and annotate each on 26 features spanning visual authority, presenter attributes, narrative strategies, and engagement cues. A transparent annotation pipeline integrates automatic speech recognition, principled frame selection, and a multimodal model, with human verification on a stratified subsample showing strong agreement. Descriptively, a confident single presenter in studio or home settings dominates, and clinical contexts are rare. Analytically, authority cues such as titles, slides and charts, and certificates frequently co-occur with persuasive elements (jargon, references, fear or urgency, critiques of mainstream medicine, and conspiracies) and with monetization (sales links and calls to subscribe). References and science-like visuals often travel with emotive and oppositional narratives rather than signaling restraint.
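The human-verification step described above rests on inter-annotator agreement between the automatic pipeline and human labels. A standard chance-corrected measure is Cohen's kappa; the stdlib sketch below is a generic implementation of that metric, not the authors' verification code.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators on the same
    items (e.g., machine vs. human labels for one annotated feature)."""
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label alike.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently with
    # their observed label distribution.
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

# Example: four items, the annotators disagree on one.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1]))  # 0.5
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate agreement no better than chance.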


Backprompting: Leveraging Synthetic Production Data for Health Advice Guardrails

Cheng, Kellen Tan, Gentile, Anna Lisa, DeLuca, Chad, Ren, Guang-Jie

arXiv.org Artificial Intelligence

The pervasiveness of large language models (LLMs) in enterprise settings has also brought significant risks associated with their usage. Guardrails technologies aim to mitigate this risk by filtering LLMs' input/output text through various detectors. However, developing and maintaining robust detectors faces many challenges, one of which is the difficulty in acquiring production-quality labeled data on real LLM outputs prior to deployment. In this work, we propose backprompting, a simple yet intuitive solution for generating production-like labeled data for health advice guardrails development. Furthermore, we pair our backprompting method with a sparse human-in-the-loop clustering technique to label the generated data. Our aim is to construct a parallel corpus roughly representative of the original dataset yet resembling real LLM output. We then infuse existing datasets with our synthetic examples to produce robust training data for our detector. We test our technique on one of the most difficult and nuanced guardrails, the identification of health advice in LLM output, and demonstrate improvement over other solutions. Our detector is able to outperform GPT-4o by up to 3.73%, despite having 400x fewer parameters.
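The two-stage recipe in the abstract — generate production-like examples from labeled seeds, then label them with sparse human effort via clustering — can be sketched in stdlib Python. This is an illustrative reconstruction under assumptions, not the authors' pipeline: `generate` stands in for the LLM and `label_exemplar` for the human annotator.

```python
import math
from collections import Counter

def backprompt(seed_examples, generate):
    """Stage 1: for each labeled seed, ask a generator (in practice an
    LLM prompted to mimic production traffic) for a production-like
    rewrite, carrying the seed's label along."""
    return [(generate(text), label) for text, label in seed_examples]

def _bow(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def sparse_cluster_label(texts, label_exemplar, threshold=0.5):
    """Stage 2: greedy single-pass clustering. A new text joins the
    first cluster whose exemplar it resembles; otherwise it founds a
    new cluster and only that exemplar goes to the human labeler."""
    clusters = []   # list of (exemplar bag-of-words, human label)
    labeled = []
    for text in texts:
        vec = _bow(text)
        for ex_vec, label in clusters:
            if _cosine(vec, ex_vec) >= threshold:
                labeled.append((text, label))   # propagate cluster label
                break
        else:
            label = label_exemplar(text)        # one human judgment
            clusters.append((vec, label))
            labeled.append((text, label))
    return labeled
```

With three texts where two paraphrase each other, the human labels only two cluster exemplars while all three texts end up labeled — the "sparse" part of the human-in-the-loop step.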


More Americans are turning to AI for health advice

FOX News

Forget typing symptoms into a search bar. A growing number of Americans are now using artificial intelligence to manage their health and wellness. According to a nationwide survey of 2,000 U.S. adults, 35% report already relying on AI to understand and manage aspects of their well-being. From planning meals to getting fitness advice, AI is quickly moving from a futuristic concept to a daily health tool.


Study shows ChatGPT accurately provides health advice in 88.25% of cases

#artificialintelligence

A recent study has shed light on the reliability of ChatGPT, an artificial intelligence (AI) language model created by OpenAI, in providing accurate health advice. According to the research, published in the journal Radiology, ChatGPT offers correct health advice 88.25% of the time, suggesting it could be a helpful tool for individuals seeking health information. The researchers evaluated the AI model's responses to a wide range of health-related questions sourced from medical professionals, internet users, and medical literature.


HealthE: Classifying Entities in Online Textual Health Advice

Gatto, Joseph, Seegmiller, Parker, Johnston, Garrett, Preum, Sarah M.

arXiv.org Artificial Intelligence

The processing of entities in natural language is essential to many medical NLP systems. Unfortunately, existing datasets vastly under-represent the entities required to model public-health-relevant texts such as the health advice often found on sites like WebMD. People rely on such information for personal health management and clinically relevant decision making. In this work, we release a new annotated dataset, HealthE, consisting of 6,756 health advice statements. HealthE has a more granular label space than existing medical NER corpora and contains annotations for diverse health phrases. Additionally, we introduce a new health entity classification model, EP S-BERT, which leverages textual context patterns in the classification of entity classes. EP S-BERT provides a 4-point increase in F1 score over the nearest baseline and a 34-point increase in F1 when compared to off-the-shelf medical NER tools trained to extract disease and medication mentions from clinical texts. All code and data are publicly available on GitHub.
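The abstract says EP S-BERT leverages textual context patterns when classifying entities. One common way to expose such patterns to a classifier (an assumed illustration, not the paper's exact architecture; the `[ENT]` and `[SEP]` marker tokens are hypothetical) is to mask the entity span so the model sees both the surrounding pattern and the entity text:

```python
def entity_pattern_input(sentence, entity, marker="[ENT]", sep="[SEP]"):
    """Build a classifier input pairing the context pattern (the
    sentence with the entity span masked) with the entity itself, so
    a downstream model can weigh both signals."""
    pattern = sentence.replace(entity, marker)
    return f"{pattern} {sep} {entity}"

print(entity_pattern_input("Take ibuprofen with food.", "ibuprofen"))
# Take [ENT] with food. [SEP] ibuprofen
```

The masked pattern "Take [ENT] with food." alone suggests a medication-like class, which is the intuition behind context-pattern features.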


Scope of Pre-trained Language Models for Detecting Conflicting Health Information

Gatto, Joseph, Basak, Madhusudan, Preum, Sarah M.

arXiv.org Artificial Intelligence

An increasing number of people now rely on online platforms to meet their health information needs. Thus, identifying inconsistent or conflicting textual health information has become a safety-critical task. Health advice data poses a unique challenge where information that is accurate in the context of one diagnosis can be conflicting in the context of another. For example, people suffering from diabetes and hypertension often receive conflicting health advice on diet. This motivates the need for technologies which can provide contextualized, user-specific health advice. A crucial step towards contextualized advice is the ability to compare health advice statements and detect if and how they are conflicting. This is the task of health conflict detection (HCD). Given two pieces of health advice, the goal of HCD is to detect and categorize the type of conflict. It is a challenging task, as (i) automatically identifying and categorizing conflicts requires a deeper understanding of the semantics of the text, and (ii) the amount of available data is quite limited. In this study, we are the first to explore HCD in the context of pre-trained language models. We find that DeBERTa-v3 performs best, with a mean F1 score of 0.68 across all experiments. We additionally investigate the challenges posed by different conflict types and how synthetic data improves a model's understanding of conflict-specific semantics. Finally, we highlight the difficulty in collecting real health conflicts and propose a human-in-the-loop synthetic data augmentation approach to expand existing HCD datasets. Our HCD training dataset is more than twice the size of the existing HCD dataset and is made publicly available on GitHub.
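HCD takes two advice statements and predicts a conflict category; with pre-trained LMs this is typically framed as sequence-pair classification. The sketch below shows that framing with a toy keyword scorer standing in for a fine-tuned model — the label names and keyword lists are illustrative assumptions, not the paper's taxonomy or classifier.

```python
NEGATING = {"avoid", "limit", "stop", "reduce", "restrict"}
AFFIRMING = {"eat", "take", "increase", "drink", "add"}

def encode_pair(advice_a, advice_b, sep="[SEP]"):
    """Pack both statements into one cross-encoder input, the usual
    format for fine-tuning a pair classifier such as DeBERTa-v3."""
    return f"{advice_a} {sep} {advice_b}"

def toy_conflict_detector(advice_a, advice_b):
    """Keyword stand-in for a fine-tuned model: flag a direct conflict
    when the statements share a topic word but pull in opposite
    directions (an affirming verb on one side, a negating verb on
    the other)."""
    ta = set(advice_a.lower().split())
    tb = set(advice_b.lower().split())
    shared_topic = (ta & tb) - NEGATING - AFFIRMING
    opposed = bool((ta & NEGATING and tb & AFFIRMING)
                   or (ta & AFFIRMING and tb & NEGATING))
    return "direct_conflict" if shared_topic and opposed else "no_conflict"

print(toy_conflict_detector("increase salt intake", "limit salt intake"))
# direct_conflict
```

A real system would replace `toy_conflict_detector` with a model fine-tuned on `encode_pair` inputs; the diabetes/hypertension diet example in the abstract is exactly the kind of diagnosis-conditional conflict such a keyword heuristic cannot capture.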


Does Amazon have answers for the future of the NHS?

The Guardian

Enthusiasts predicted the plan would relieve the pressure on hard-pressed GPs. Critics saw it as a sign of creeping privatisation and a data-protection disaster in waiting. Reactions to news last month that Amazon's voice-controlled digital assistant Alexa was to begin using NHS website information to answer health queries were many and varied. US-based healthcare tech analysts say the deal is just the latest of a series of recent moves that together reveal an audacious, long-term strategy on the part of Amazon. From its entry into the lucrative prescription drugs market and development of AI tools to analyse patient records, to Alexa apps that manage diabetes and data-driven experiments on how to cut medical bills, the $900bn global giant's determination to make the digital disruption of healthcare a central part of its future business model is becoming increasingly clear.


NHS partners with Amazon to offer health advice via Alexa

#artificialintelligence

In a world first, Amazon has partnered with the UK's health service, the NHS. From this week, its voice-controlled device, Alexa, will give out health advice and answer common questions such as 'Alexa, how do I treat a migraine?' and 'Alexa, what are the symptoms of chickenpox?' In response to health-related queries, Alexa will now search the NHS Choices website for health information (and there you were thinking Amazon was all about Prime Day deals). The aim is to ease pressure on the NHS and help those who can't easily access information on the internet, such as the elderly or blind people. Will this partnership with Amazon really end up easing pressure on the health service, or will it lead to data protection issues and misdiagnoses? As we've previously explored, voice interfaces are one of the fastest-growing web design trends of recent years, but so far the news has been met with concerns over the appropriateness of using Alexa to deliver this kind of important and sensitive information.


NHS teams up with Amazon to bring Alexa to patients

The Guardian

The NHS has teamed up with Amazon to allow elderly people, blind people and other patients who cannot easily search for health advice on the internet to access the information through the AI-powered voice assistant Alexa. The health service hopes patients asking Alexa for health advice will ease pressure on the NHS, with Amazon's algorithm using information from the NHS website to provide answers to questions such as: "Alexa, how do I treat a migraine?"; "Alexa, what are the symptoms of flu?"; and "Alexa, what are the symptoms of chickenpox?" The Department of Health (DoH) said it would empower patients and hopefully reduce the pressure on the NHS by providing reliable information on common illnesses. The health secretary, Matt Hancock, said: "Technology like this is a great example of how people can access reliable, world-leading NHS advice from the comfort of their home, reducing the pressure on our hardworking GPs and pharmacists."


This artificial intelligence platform can provide health advice that is as accurate as a real doctor's

#artificialintelligence

A new artificial intelligence platform has demonstrated its ability to provide health advice that is as good as a human doctor's, according to research published on the preprint server arXiv.org. The technology, developed by British company Babylon Health, takes the form of a mobile phone app or website that patients interact with via a chat service. The AI system has been put through rigorous testing in collaboration with the U.K.'s Royal College of Physicians, as well as researchers from Stanford University and the Yale New Haven Health System. Part of this testing involved the AI taking a medical diagnosis exam that trainee primary care physicians in the U.K. must pass to be able to practice independently. Remarkably, the AI doctor scored 81 percent on its first attempt.